
    Machine intelligence sports as research programs

    Games and competitions have played a significant role throughout the history of artificial intelligence and robotics. Machine intelligence games are examined here from a distinctive methodological perspective, focusing on their use as generators of multidisciplinary research programs. In particular, RoboCup is analyzed as an exemplary case of a contemporary research program developing from machine intelligence games. The research programs arising from these games are schematized in terms of framework building, subgoaling, and outcome appraisal processes. The latter process is found to involve a rather intricate system of rewards and penalties, which takes into account the double allegiance of participating scientists and the trading and sharing interchanges taking place in a multidisciplinary research environment, in addition to expected industrial payoffs and a variety of other fringe research benefits, such as research outreach and results dissemination, recruitment of junior researchers, and student enrollment.

    Autonomia delle macchine e filosofia dell'intelligenza artificiale

    Philosophical interest in AI and robotic autonomous systems stems prominently from distinctive ethical concerns: in which circumstances ought autonomous systems to be permitted or prohibited from performing tasks that have significant implications for human responsibilities, moral duties, or fundamental rights? Deontological and consequentialist approaches to ethical theorizing are brought to bear on these issues in the context afforded by the case studies of autonomous vehicles and autonomous weapons. Local solutions to intertheoretic conflicts concerning these case studies are advanced towards the development of a more comprehensive ethical platform guiding the design and use of autonomous machinery.

    The AI carbon footprint and responsibilities of AI scientists

    This article examines ethical implications of the growing AI carbon footprint, focusing on the fair distribution of prospective responsibilities among the groups of actors involved. First, the major groups of involved actors are identified, including AI scientists, the AI industry, and AI infrastructure providers, from data centers to electrical energy suppliers. Second, the responsibilities of AI scientists concerning climate warming mitigation actions are disentangled from those of the other involved actors. Third, to implement these responsibilities, nudging interventions are suggested, leveraging AI competitive games that would reward research combining better system accuracy with greater computational and energy efficiency; a sketch of such a scoring rule follows below. Finally, it is argued that, in addition to the AI carbon footprint, another ethical issue with a genuinely global dimension is now emerging on the AI ethics agenda: the threats that AI-powered cyberweapons pose to the digital command, control, and communication infrastructure of nuclear weapons systems.
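    To illustrate the kind of nudging mechanism envisaged here, the sketch below shows a hypothetical leaderboard score for such a competition, rewarding accuracy while discounting energy use. The functional form, the weight alpha, and the entry names are illustrative assumptions, not a proposal from the article.

```python
# A hypothetical scoring rule for a competition that rewards accuracy combined
# with computational and energy efficiency (illustrative assumption only).
from dataclasses import dataclass
import math

@dataclass
class Submission:
    name: str
    accuracy: float      # test-set accuracy in [0, 1]
    energy_kwh: float    # measured energy for training and evaluation

def green_score(s: Submission, alpha: float = 0.05) -> float:
    """Higher is better: accuracy discounted by log-scaled energy consumption."""
    return s.accuracy - alpha * math.log10(1.0 + s.energy_kwh)

entries = [
    Submission("large-model", accuracy=0.91, energy_kwh=5000.0),
    Submission("efficient-model", accuracy=0.89, energy_kwh=40.0),
]
for s in sorted(entries, key=green_score, reverse=True):
    print(f"{s.name}: score = {green_score(s):.3f}")
```

    Under a rule of this shape, the slightly less accurate but far more energy-efficient entry ranks first, which is the incentive the proposed nudging intervention is meant to create.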

    Nuclear Weapons and the Militarization of AI

    This contribution provides an overview of the nuclear risks emerging from the militarization of AI technologies and systems. These include AI enhancements of cyber threats to nuclear command, control, and communication infrastructures; proposed uses of AI systems affected by inherent vulnerabilities in nuclear early warning; and AI-powered unmanned vessels trailing submarines armed with nuclear ballistic missiles. Taken together, the nuclear risks emerging from the militarization of AI add significant new motives for nuclear non-proliferation and disarmament.

    Explaining Engineered Computing Systems’ Behaviour: the Role of Abstraction and Idealization

    This paper addresses the methodological problem of analysing what it is to explain observed behaviours of engineered computing systems (BECS), focusing on the crucial role that abstraction and idealization play in explanations of both correct and incorrect BECS. First, it is argued that an understanding of explanatory requests about observed miscomputations crucially involves reference to the rich background afforded by hierarchies of functional specifications. Second, many explanations concerning incorrect BECS are found to abstract away (and profitably so, on account of both the relevance and the intelligibility of the explanans) from descriptions of the physical components and processes of computing systems that one finds below the logic circuit and gate layer of functional specification hierarchies. Third, model-based explanations of both correct and incorrect BECS that are provided in the framework of formal verification methods often involve idealizations. Moreover, a distinction between restrictive and permissive idealizations is introduced, and their roles in BECS explanations are analysed.
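    As a concrete, hypothetical illustration (not drawn from the paper) of how abstraction figures in explaining a miscomputation, the sketch below models a faulty 32-bit midpoint routine: the observed incorrect behaviour is explained by reference to the functional specification it violates, while physical layers below the instruction level are abstracted away.

```python
# Hypothetical example: a faulty 32-bit midpoint routine. The explanation of
# its observed miscomputation refers to the functional specification hierarchy
# (fixed-width two's-complement addition wraps around), abstracting away from
# gate- and device-level descriptions of the underlying hardware.

def add32(a: int, b: int) -> int:
    """Model of 32-bit two's-complement addition at the instruction layer."""
    s = (a + b) & 0xFFFFFFFF
    return s - 2**32 if s >= 2**31 else s

def midpoint(lo: int, hi: int) -> int:
    """Intended specification: return floor((lo + hi) / 2) for 0 <= lo <= hi."""
    return add32(lo, hi) // 2        # faulty: the 32-bit sum can wrap around

lo, hi = 2_000_000_000, 2_000_000_001
observed = midpoint(lo, hi)          # negative value: a miscomputation
expected = (lo + hi) // 2            # what the specification requires
print(observed, expected)

# The explanans cites the violated specification and the wrap-around at the
# instruction layer; descriptions of circuits and physical processes below
# that layer are abstracted away as irrelevant to understanding the failure.
```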

    The Human Control Over Autonomous Robotic Systems: What Ethical and Legal Lessons for Judicial Uses of AI?

    This contribution provides an overview of the normative problems posed by increasingly autonomous robotic systems, with the goal of drawing significant lessons for the use of AI technologies in judicial proceedings, focusing especially on the shared control relationship between the human decision-maker (i.e., the judge) and the software system. The exemplary case studies that we zoom in on concern two ethically and legally sensitive application domains for robotics: autonomous weapons systems and increasingly autonomous surgical robots. The first case study is expedient to delve into the issue of the normative acceptability of autonomous decision-making and action by robots. The second case study is used to investigate the issue of human responsibility in human-robot shared control regimes. The convergent implications of both case studies for the analysis of ethical and legal issues raised by judicial applications of AI highlight the need for, and the core contents of, a genuinely meaningful human control to be exerted on the operational autonomy, if any, of AI systems in judicial proceedings.

    Toward a normative model of Meaningful Human Control over weapons systems

    The notion of meaningful human control (MHC) has gathered overwhelming consensus and interest in the autonomous weapons systems (AWS) debate. By shifting the focus of this debate to MHC, one sidesteps recalcitrant definitional issues about the autonomy of weapons systems and profitably moves the normative discussion forward. Some delegations participating in discussions at the Group of Governmental Experts on Lethal Autonomous Weapons Systems meetings endorsed the notion of MHC with the proviso that one size of human control does not fit all weapons systems and uses thereof. Building on this broad suggestion, we propose a “differentiated”—but also “principled” and “prudential”—framework for MHC over weapons systems. The need for a differentiated approach—namely, an approach acknowledging that the extent of normatively required human control depends on the kind of weapons systems used and the contexts of their use—is supported by highlighting major drawbacks of proposed uniform solutions. Within the wide space of differentiated MHC profiles, distinctive ethical and legal reasons are offered for principled solutions that invariably assign to humans the following control roles: (1) “fail-safe actor,” contributing to preventing the weapon's action from resulting in indiscriminate attacks in breach of international humanitarian law; (2) “accountability attractor,” securing legal conditions for responsibility ascriptions under international criminal law (ICL); and (3) “moral agency enactor,” ensuring that decisions affecting the life, physical integrity, and property of people involved in armed conflicts are taken exclusively by moral agents, thereby alleviating the human dignity concerns associated with the autonomous performance of targeting decisions. The prudential character of our framework is expressed by a default rule imposing the most stringent levels of human control on weapons targeting. This default rule is motivated by epistemic uncertainties about the behaviors of AWS. Designated exceptions to the rule are admitted only in the framework of an international agreement among states, expressing the shared conviction that lower levels of human control suffice to preserve the fail-safe actor, accountability attractor, and moral agency enactor requirements for those explicitly listed exceptions. Finally, we maintain that this framework affords an appropriate normative basis for both national arms review policies and binding international regulations on human control of weapons systems.

    Le implicazioni etico-giuridiche delle nuove tecnologie robotiche ed informatiche in campo militare tra lex lata e lex ferenda

    This contribution briefly overviews attempts by the international community to address the ethical and legal implications of new military technologies, with a focus on three technologies that have been radically redrawing the contours of warfare: armed drones, cyber weapons, and autonomous weapons systems. Notably, it will try to provide some insight into whether the issues raised by new military technologies can be properly governed by existing law (lex lata) or whether these technologies are so disruptive as to require the adoption of new principles and rules (lex ferenda). In this respect, it will be argued that, while adherence to the lex lata undoubtedly provides a sound starting point, changes in the law may sometimes be needed to cope with problems that are simply newcomers on the international scene. It will conclude by observing that the elaboration of creative and apt solutions to these problems, building on the legal heritage of the past, is one of the most important challenges ahead for the international community.

    Middle-Level Features for the Explanation of Classification Systems by Sparse Dictionary Methods

    Machine learning (ML) systems are affected by a pervasive lack of transparency. The eXplainable Artificial Intelligence (XAI) research area addresses this problem and the related issue of explaining the behavior of ML systems in terms that are understandable to human beings. In many XAI approaches, the outputs of ML systems are explained in terms of low-level features of their inputs. However, these approaches leave a substantive explanatory burden with human users, insofar as the latter are required to map low-level properties onto more salient and readily understandable parts of the input. To alleviate this cognitive burden, an alternative model-agnostic framework is proposed here. This framework is instantiated to address explanation problems in the context of ML image classification systems, without relying on pixel relevance maps and other low-level features of the input. More specifically, sets of perceptually salient middle-level properties of classification inputs are obtained by applying sparse dictionary learning techniques. These middle-level properties are used as building blocks for explanations of image classifications. The resulting explanations are parsimonious, owing to their reliance on a limited set of middle-level image properties. They can also be contrastive, because the set of middle-level image properties can be used to explain why the system advanced the proposed classification over antagonist classifications. In view of its model-agnostic character, the proposed framework is adaptable to a variety of other ML systems and explanation problems.
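    A minimal, model-agnostic sketch of the kind of pipeline described above, assuming a grayscale input image and a black-box predict_proba scoring function. The dictionary-learning step uses scikit-learn; the atom-ablation relevance scoring is an illustrative stand-in, not the paper's exact procedure.

```python
# Sketch: learn a sparse dictionary of middle-level image parts and score each
# part's contribution to a black-box classifier's decision (assumption-laden
# illustration of the framework described above).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                               reconstruct_from_patches_2d)

def middle_level_explanation(image, predict_proba, target_class,
                             patch_size=(8, 8), n_atoms=16):
    """Score middle-level image parts (dictionary atoms) by their contribution
    to predict_proba(image)[target_class]. `predict_proba` is any black-box
    function mapping an H x W image to a vector of class probabilities."""
    # 1. Learn a small dictionary of middle-level parts from the image's
    #    patches, together with a sparse code for each patch.
    patches = extract_patches_2d(image, patch_size)
    flat = patches.reshape(len(patches), -1)
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="lasso_lars",
                                       transform_alpha=1.0, random_state=0)
    codes = dico.fit_transform(flat)                 # sparse code per patch

    baseline = predict_proba(image)[target_class]
    relevances = np.zeros(n_atoms)

    # 2. Score each atom by how much the target-class probability drops when
    #    its contribution is removed from the reconstructed image.
    for k in range(n_atoms):
        ablated = codes.copy()
        ablated[:, k] = 0.0
        recon_patches = (ablated @ dico.components_).reshape(patches.shape)
        recon = reconstruct_from_patches_2d(recon_patches, image.shape)
        relevances[k] = baseline - predict_proba(recon)[target_class]

    return dico.components_.reshape(n_atoms, *patch_size), relevances
```

    Atoms with the largest relevance scores serve as the middle-level building blocks of the explanation; re-running the scoring with a rival target class supports the contrastive reading mentioned in the abstract.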